
    APIR-Net: Autocalibrated Parallel Imaging Reconstruction using a Neural Network

    Deep learning has been successfully demonstrated in MRI reconstruction of accelerated acquisitions. However, its dependence on representative training data limits its application across different contrasts, anatomies, or image sizes. To address this limitation, we propose an unsupervised, autocalibrated k-space completion method based on a uniquely designed neural network that reconstructs the full k-space from an undersampled k-space, exploiting the redundancy among the multiple channels of the receive coil in a parallel imaging acquisition. To achieve this, contrary to common convolutional network approaches, the proposed network has a decreasing number of feature maps of constant size. In contrast to conventional parallel imaging methods such as GRAPPA, which estimate the prediction kernel from the fully sampled autocalibration signals in a linear way, our method is able to learn nonlinear relations between sampled and unsampled positions in k-space. The proposed method was compared to the state-of-the-art ESPIRiT and RAKI methods in terms of noise amplification and visual image quality in both phantom and in-vivo experiments. The experiments indicate that APIR-Net provides a promising alternative to conventional parallel imaging methods, and results in improved image quality, especially for low-SNR acquisitions. Comment: To appear in the proceedings of the MICCAI 2019 Workshop on Machine Learning for Medical Image Reconstruction
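    As a point of contrast with APIR-Net's learned nonlinear mapping, the linear GRAPPA-style kernel estimation mentioned above can be sketched in a few lines: the kernel that predicts a missing k-space line from neighbouring sampled lines of all coils is fit by least squares on a fully sampled autocalibration (ACS) block. The data and sizes below are synthetic and purely illustrative, not the paper's setup:

```python
import numpy as np

# Synthetic multi-coil k-space: (coils, ky, kx); hypothetical sizes.
rng = np.random.default_rng(0)
nc, ny, nx = 4, 32, 32
kspace = rng.standard_normal((nc, ny, nx)) + 1j * rng.standard_normal((nc, ny, nx))

# Fully sampled autocalibration block (ACS region).
acs = kspace[:, 12:20, :]

# Build the linear system: predict each coil's centre line from the
# lines directly above and below it, across all coils.
src, tgt = [], []
for ky in range(1, acs.shape[1] - 1):
    for kx in range(acs.shape[2]):
        src.append(np.concatenate([acs[:, ky - 1, kx], acs[:, ky + 1, kx]]))
        tgt.append(acs[:, ky, kx])
src, tgt = np.array(src), np.array(tgt)

# GRAPPA estimates this kernel linearly by least squares; APIR-Net
# replaces this step with a nonlinear, learned mapping.
weights, *_ = np.linalg.lstsq(src, tgt, rcond=None)
pred = src @ weights
err = np.linalg.norm(pred - tgt) / np.linalg.norm(tgt)
```

The key limitation the paper targets is visible here: the prediction is constrained to be a single linear combination of neighbouring samples, however complex the coil redundancy actually is.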

    CINENet: deep learning-based 3D cardiac CINE MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions

    Cardiac CINE magnetic resonance imaging is the gold standard for the assessment of cardiac function. Imaging accelerations have been shown to enable 3D CINE with left ventricular (LV) coverage in a single breath-hold. However, 3D imaging remains limited by anisotropic resolution and long reconstruction times. Recently, deep learning has shown promising results for computationally efficient reconstruction of highly accelerated 2D CINE imaging. In this work, we propose a novel 4D (3D + time) deep learning-based reconstruction network, termed 4D CINENet, for prospectively undersampled 3D Cartesian CINE imaging. CINENet is based on (3 + 1)D complex-valued spatio-temporal convolutions and multi-coil data processing. We trained and evaluated the proposed CINENet on in-house acquired 3D CINE data of 20 healthy subjects and 15 patients with suspected cardiovascular disease. The proposed CINENet outperforms iterative reconstructions in visual image quality and contrast (+67% improvement). We found good agreement in LV function (bias ± 95% confidence) in terms of end-systolic volume (0 ± 3.3 ml), end-diastolic volume (-0.4 ± 2.0 ml) and ejection fraction (0.1 ± 3.2%) compared to the clinical gold-standard 2D CINE, enabling single breath-hold isotropic 3D CINE in less than 10 s of scan time and ~5 s of reconstruction time.
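    The complex-valued convolutions that CINENet builds on reduce to combinations of real convolutions, following the product rule (a+ib)(c+id) = (ac-bd) + i(ad+bc). A minimal 1D numpy sketch of that decomposition follows; the paper's layers are learned (3+1)D filters, so this shows only the complex-arithmetic building block:

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex convolution expressed as four real convolutions."""
    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    real = np.convolve(xr, wr) - np.convolve(xi, wi)
    imag = np.convolve(xr, wi) + np.convolve(xi, wr)
    return real + 1j * imag

# Sanity check against numpy's native complex convolution.
x = np.array([1 + 2j, 3 - 1j, 0.5j])
w = np.array([2 - 1j, 1j])
out = complex_conv1d(x, w)
```

In a deep network the same decomposition is applied per layer, with separate real and imaginary filter banks, which is what allows complex k-space or image data to be processed without discarding phase.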

    Rapid estimation of 2D relative B(1)(+)-maps from localizers in the human heart at 7T using deep learning

    PURPOSE: Subject-tailored parallel transmission pulses for ultra-high field body applications are typically calculated from subject-specific B(1)(+)-maps of all transmit channels, which require lengthy adjustment times. This study investigates the feasibility of using deep learning to estimate complex, channel-wise, relative 2D B(1)(+)-maps from a single gradient echo localizer, to overcome long calibration times. METHODS: 126 channel-wise, complex, relative 2D B(1)(+)-maps of the human heart from 44 subjects were acquired at 7T using a Cartesian, cardiac gradient-echo sequence obtained under breath-hold, to create a library for network training and cross-validation. The deep learning-predicted maps were qualitatively compared to the ground truth. Phase-only B(1)(+)-shimming was subsequently performed on the estimated B(1)(+)-maps for a region of interest covering the heart. The proposed network was applied at 7T to 3 unseen test subjects. RESULTS: The deep learning-based B(1)(+)-maps, derived in approximately 0.2 seconds, match the ground truth in magnitude and phase. The static, phase-only pulse design performs best when maximizing the mean transmission efficiency. In-vivo application of the proposed network to unseen subjects demonstrates the feasibility of this approach: the network yields predicted B(1)(+)-maps comparable to the acquired ground truth, and anatomical scans reflect the resulting B(1)(+)-pattern using the deep learning-based maps. CONCLUSION: The feasibility of estimating 2D relative B(1)(+)-maps from initial localizer scans of the human heart at 7T using deep learning is successfully demonstrated. Because the technique requires only sub-seconds to derive channel-wise B(1)(+)-maps, it offers high potential for advancing clinical body imaging at ultra-high fields.
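    The phase-only shimming step applied to the predicted maps amounts to choosing one phase per transmit channel to make the channel fields add constructively over the ROI. The sketch below uses synthetic maps and a simple mean-phase-alignment heuristic as an illustrative stand-in for the paper's optimization of mean transmission efficiency:

```python
import numpy as np

# Synthetic channel-wise complex B1+ values over ROI voxels: (channels, voxels).
rng = np.random.default_rng(2)
nc, nvox = 2, 64
maps = rng.standard_normal((nc, nvox)) + 1j * rng.standard_normal((nc, nvox))

# Heuristic phase-only shim: rotate each channel so its ROI-mean field
# is real and positive, so the mean fields add constructively.
phases = -np.angle(maps.mean(axis=1))
shimmed = (np.exp(1j * phases)[:, None] * maps).sum(axis=0)
baseline = maps.sum(axis=0)  # all channels at zero relative phase

eff_shim = np.abs(shimmed.mean())   # proxy for mean transmission efficiency
eff_base = np.abs(baseline.mean())
```

By the triangle inequality the aligned combination can never have a lower mean-field magnitude than the zero-phase baseline, which is why even this simple heuristic improves the shim; the paper's design maximizes the efficiency metric properly rather than using this shortcut.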

    Deep Learning Formulation of ECGI for Data-driven Integration of Spatiotemporal Correlations and Imaging Information

    The challenge of non-invasive Electrocardiographic Imaging (ECGI) is to recreate the electrical activity of the heart from body surface potentials. Specifically, there are numerical difficulties due to the ill-posed nature of the problem. We propose a novel method based on Conditional Variational Autoencoders using deep generative neural networks to overcome this challenge. By conditioning the electrical activity on heart shape and electrical potentials, our model is able to generate activation maps with good accuracy on simulated data (mean square error, MSE = 0.095). This method differs from other formulations because it naturally takes into account spatio-temporal correlations as well as the imaging substrate through convolutions and conditioning. We believe these features can help improve ECGI results.
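    The conditional-VAE machinery this method relies on centres on two pieces: the reparameterization trick, which makes sampling differentiable, and a KL term that regularizes the latent code toward the prior. A minimal numpy sketch with placeholder encoder outputs (in the paper, mu and logvar would come from an encoder conditioned on heart shape and surface potentials):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder encoder outputs for a 2D latent space (illustrative values).
mu = np.array([0.5, -1.0])
logvar = np.array([0.0, -2.0])

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
# so gradients can flow through mu and logvar during training.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# KL divergence of N(mu, sigma^2) from the standard normal prior N(0, I).
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
```

The training loss combines this KL term with a reconstruction loss on the generated activation maps; the conditioning inputs enter through the encoder and decoder, not through this sampling step.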

    Solving Phase Retrieval with a Learned Reference

    Fourier phase retrieval is a classical problem that deals with the recovery of an image from the amplitude measurements of its Fourier coefficients. Conventional methods solve this problem via iterative (alternating) minimization by leveraging some prior knowledge about the structure of the unknown image. The inherent ambiguities about shift and flip in the Fourier measurements make this problem especially difficult, and most existing methods use several random restarts with different permutations. In this paper, we assume that a known (learned) reference is added to the signal before capturing the Fourier amplitude measurements. Our method is inspired by the principle of adding a reference signal in holography. To recover the signal, we implement an iterative phase retrieval method as an unrolled network. Then we use backpropagation to learn the reference that provides the best reconstruction for a fixed number of phase retrieval iterations. We performed a number of simulations on a variety of datasets under different conditions and found that our proposed method for phase retrieval via unrolled network and learned reference provides near-perfect recovery at fixed (small) computational cost. We compared our method with standard Fourier phase retrieval methods and observed significant performance enhancement using the learned reference. Comment: Accepted to ECCV 2020. Code is available at https://github.com/CSIPlab/learnPR_referenc
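    The kind of amplitude-projection iteration that gets unrolled in such methods can be sketched in numpy. Here the reference u is random rather than learned, and the update is a plain Gerchberg-Saxton-style step with a nonnegativity prior, so this is only a toy illustration of the measurement model and iteration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = rng.random((8, 8))          # unknown nonnegative image
u = rng.random((8, 8))               # known additive reference (learned in the paper)

# Measurement model: Fourier amplitudes of the signal-plus-reference.
y = np.abs(np.fft.fft2(x_true + u))

# Alternating projections: enforce measured amplitudes in Fourier space,
# then the reference and nonnegativity constraints in image space.
x = np.zeros_like(x_true)
for _ in range(50):
    F = np.fft.fft2(x + u)
    F = y * np.exp(1j * np.angle(F))     # keep phase, impose measured amplitude
    x = np.real(np.fft.ifft2(F)) - u     # subtract the known reference
    x = np.clip(x, 0, None)              # nonnegativity prior

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Unrolling means treating a fixed number of these iterations as network layers and backpropagating the reconstruction error through them to the reference u, which is what lets the reference be optimized rather than hand-designed.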

    Single-heartbeat cardiac cine imaging via jointly regularized non-rigid motion corrected reconstruction

    PURPOSE: Develop a novel approach for 2D breath-hold cardiac cine from a single heartbeat, by combining cardiac motion corrected reconstructions and non-rigidly aligned patch-based regularization. METHODS: Conventional cardiac cine imaging is obtained via motion resolved reconstructions of data acquired over multiple heartbeats. Here, we achieve single-heartbeat cine imaging by incorporating non-rigid cardiac motion correction into the reconstruction of each cardiac phase, in conjunction with a motion-aligned patch-based regularization. The proposed Motion Corrected CINE (MC-CINE) incorporates all acquired data into the reconstruction of each (motion corrected) cardiac phase, resulting in a better-posed problem than motion resolved approaches. MC-CINE was compared to iterative SENSE and XD-GRASP in fourteen healthy subjects in terms of image sharpness, reader scoring (1-5 range) and reader ranking (1-9 range) of image quality, and single-slice left ventricular assessment. RESULTS: MC-CINE was significantly superior to both iterative SENSE and XD-GRASP using 20, 2 and 1 heartbeat(s). Iterative SENSE, XD-GRASP and MC-CINE achieved sharpness of 74%, 74% and 82% using 20 heartbeats, and 53%, 66% and 82% with 1 heartbeat, respectively. Corresponding results for reader scores were 4.0, 4.7 and 4.9 with 20 heartbeats, and 1.1, 3.0 and 3.9 with 1 heartbeat. Corresponding results for reader rankings were 5.3, 7.3 and 8.6 with 20 heartbeats, and 1.0, 3.2 and 5.4 with 1 heartbeat. MC-CINE using a single heartbeat presented non-significant differences in image quality to iterative SENSE with 20 heartbeats. MC-CINE and XD-GRASP at one heartbeat both presented a non-significant negative bias of <2% in ejection fraction relative to the reference iterative SENSE. CONCLUSION: The proposed MC-CINE significantly improves image quality relative to iterative SENSE and XD-GRASP, enabling 2D cine from a single heartbeat.